Does Character-level Information Always Improve DRS-based Semantic Parsing?
Even in the era of massive language models, it has been suggested that
character-level representations improve the performance of neural models. The
state-of-the-art neural semantic parser for Discourse Representation Structures
uses character-level representations, improving performance in the four
languages (i.e., English, German, Dutch, and Italian) in the Parallel Meaning
Bank dataset. However, how and why character-level information improves the
parser's performance remains unclear. This study provides an in-depth analysis
of performance changes by order of character sequences. In the experiments, we
compare F1-scores by shuffling the order and randomizing character sequences
after testing the performance of character-level information. Our results
indicate that incorporating character-level information does not improve the
performance in English and German. In addition, we find that the parser is not
sensitive to correct character order in Dutch. Nevertheless, performance
improvements are observed when using character-level information.

Comment: 10 pages. To appear in the 12th Joint Conference on Lexical and
Computational Semantics (*SEM 2023), co-located with ACL 2023
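The two perturbations described above can be illustrated with a minimal sketch. The function names and the exact perturbation granularity (per token) are assumptions for illustration; the paper's actual experimental setup is not specified here.

```python
import random

def shuffle_chars(token, rng):
    # Shuffle the order of characters within a token, destroying order
    # while preserving character identity.
    chars = list(token)
    rng.shuffle(chars)
    return "".join(chars)

def randomize_chars(token, rng, alphabet="abcdefghijklmnopqrstuvwxyz"):
    # Replace every character with a random one of the same length,
    # destroying both character identity and order.
    return "".join(rng.choice(alphabet) for _ in token)

rng = random.Random(0)
sentence = "the cat sat".split()
shuffled = [shuffle_chars(t, rng) for t in sentence]
randomized = [randomize_chars(t, rng) for t in sentence]
```

Comparing parser F1 on clean, shuffled, and randomized inputs then separates sensitivity to character order from sensitivity to character identity.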
Constructing Multilingual Code Search Dataset Using Neural Machine Translation
Code search is the task of finding program code that semantically matches a
given natural language query. Even though some of the existing datasets for
this task are multilingual on the programming language side, their query data
are only in English. In this research, we create a multilingual code search
dataset in four natural and four programming languages using a neural machine
translation model. Using our dataset, we pre-train and fine-tune the
Transformer-based models and then evaluate them on multiple code search test
sets. Our results show that the model pre-trained with all natural and
programming language data performs best in most cases. By applying
back-translation data filtering to our dataset, we demonstrate that the
translation quality affects the model's performance to a certain extent, but
the data size matters more.

Comment: To appear in the Proceedings of the ACL 2023 Student Research Workshop
(SRW)
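Back-translation data filtering of the kind mentioned above can be sketched as follows. The token-overlap score is a hypothetical stand-in for a real quality metric (e.g., BLEU), and the threshold is an assumed value; the paper's actual filtering criterion is not given here.

```python
def overlap_score(original, back_translated):
    # Jaccard overlap between token sets: a crude proxy for how well the
    # back-translated query preserves the original query's meaning.
    a = set(original.lower().split())
    b = set(back_translated.lower().split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def filter_pairs(pairs, threshold=0.5):
    # Keep only (original, back-translated) pairs whose back-translation
    # stays close to the original English query.
    return [p for p in pairs if overlap_score(p[0], p[1]) >= threshold]

pairs = [
    ("sort a list of numbers", "sort a list of numbers"),  # faithful round trip
    ("sort a list of numbers", "arrange items"),           # degraded round trip
]
kept = filter_pairs(pairs)
```

Filtering at different thresholds yields datasets of varying quality and size, which is what allows the trade-off between translation quality and data size to be measured.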